FCN-Transformer Feature Fusion for Polyp Segmentation

Authors

Abstract

Colonoscopy is widely recognised as the gold standard procedure for the early detection of colorectal cancer (CRC). Segmentation is valuable for two significant clinical applications, namely lesion detection and classification, providing the means to improve accuracy and robustness. The manual segmentation of polyps in colonoscopy images is time-consuming. As a result, the use of deep learning (DL) for the automation of polyp segmentation has become important. However, DL-based solutions can be vulnerable to overfitting and the resulting inability to generalise to images captured by different colonoscopes. Recent transformer-based architectures for semantic segmentation both achieve higher performance and generalise better than alternatives, however they typically predict a segmentation map of $$\frac{h}{4}\times \frac{w}{4}$$ spatial dimensions for an $$h\times w$$ input image. To this end, we propose a new architecture for full-size segmentation which leverages the strengths of a transformer in extracting the most important features for segmentation in a primary branch, while compensating for its limitations in full-size prediction with a secondary fully convolutional branch. The features from both branches are then fused for the prediction of a full-size segmentation map. We demonstrate our method’s state-of-the-art performance with respect to the mDice, mIoU, mPrecision, and mRecall metrics, on both the Kvasir-SEG and CVC-ClinicDB dataset benchmarks. Additionally, we train the model on each of these datasets and evaluate on the other to demonstrate its superior generalisation performance. Code available: https://github.com/CVML-UCLan/FCBFormer .
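The abstract describes the core design: a transformer branch that extracts strong features but natively predicts at h/4 × w/4 resolution, a fully convolutional branch that works at the full h × w resolution, and a fusion step that combines the two for the final full-size map. The snippet below is a minimal PyTorch sketch of that fuse-and-predict structure only; it is not the authors' FCBFormer implementation, and the branch definitions, channel widths, and the simple concatenate-then-convolve fusion head are placeholder assumptions.

```python
# Minimal sketch of a two-branch "FCN + transformer" fusion segmenter.
# NOT the authors' FCBFormer: the backbones, channel widths, and fusion head
# below are placeholder assumptions that only illustrate fusing a coarse
# (h/4 x w/4) transformer prediction with a full-size FCN branch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """3x3 conv -> BN -> ReLU, used by the full-size FCN branch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class TransformerBranch(nn.Module):
    """Stand-in transformer branch: patchify to h/4 x w/4 tokens, run a small
    TransformerEncoder, and return an h/4 x w/4 feature map."""
    def __init__(self, in_ch=3, dim=64, depth=2, heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=4, stride=4)
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        f = self.patch_embed(x)                    # (B, dim, h/4, w/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)      # (B, h*w/16, dim)
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class FCNBranch(nn.Module):
    """Fully convolutional branch that keeps the full h x w resolution."""
    def __init__(self, in_ch=3, dim=32):
        super().__init__()
        self.body = nn.Sequential(ConvBlock(in_ch, dim), ConvBlock(dim, dim))

    def forward(self, x):
        return self.body(x)                        # (B, dim, h, w)


class TwoBranchFusionSegmenter(nn.Module):
    """Upsample the transformer features to full size, concatenate them with
    the FCN features, and predict a full-size (h x w) segmentation map."""
    def __init__(self, t_dim=64, f_dim=32, num_classes=1):
        super().__init__()
        self.transformer_branch = TransformerBranch(dim=t_dim)
        self.fcn_branch = FCNBranch(dim=f_dim)
        self.fusion_head = nn.Sequential(
            ConvBlock(t_dim + f_dim, f_dim),
            nn.Conv2d(f_dim, num_classes, kernel_size=1),
        )

    def forward(self, x):
        t = self.transformer_branch(x)             # (B, t_dim, h/4, w/4)
        t = F.interpolate(t, size=x.shape[-2:], mode="bilinear",
                          align_corners=False)     # back up to h x w
        f = self.fcn_branch(x)                     # (B, f_dim, h, w)
        return self.fusion_head(torch.cat([t, f], dim=1))  # (B, 1, h, w)


if __name__ == "__main__":
    # Small input keeps the toy self-attention cheap; real inputs are larger.
    model = TwoBranchFusionSegmenter()
    logits = model(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # torch.Size([1, 1, 128, 128])
```

Run as a script, this prints torch.Size([1, 1, 128, 128]); the point illustrated is that the coarse transformer features are upsampled and fused with full-resolution convolutional features, rather than the coarse prediction simply being upsampled on its own.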


Similar articles

STFCN: Spatio-Temporal FCN for Semantic Video Segmentation

This paper presents a novel method to involve both spatial and temporal features for semantic segmentation of street scenes. Current work on convolutional neural networks (CNNs) has shown that CNNs provide advanced spatial features, supporting very good performance in the semantic segmentation task. We investigate how involving temporal features also has a good effect on segmenti...


Feature fusion for basic behavior unit segmentation from video sequences

It has become increasingly popular to study animal behaviors with the assistance of video recordings. An automated video processing and behavior analysis system is desired to replace the traditional manual annotation. We propose a framework for automatic video-based behavior analysis systems, which consists of four major modules: behavior modeling, feature extraction from video sequences, basic...


Polyp Segmentation in NBI Colonoscopy

Endoscopic screening of the colon (colonoscopy) is performed to prevent cancer and to support therapy. During intervention colon polyps are located, inspected and, if need be, removed by the investigator. We propose a segmentation algorithm as a part of an automatic polyp classification system for colonoscopic Narrow-Band images. Our approach includes multi-scale filtering for noise reduction, ...


A Generalized Motion Pattern and FCN based approach for retinal fluid detection and segmentation

SD-OCT is a non-invasive cross-sectional imaging modality useful for the diagnosis of macular defects. Efficient detection and segmentation of the abnormalities seen as biomarkers in OCT can help in analyzing the progression of the disease and advising effective treatment for the associated disease. In this work we propose a fully automated Generalized Motion Pattern (GMP) based segmentation method...


Feature Weighting for Segmentation

This paper proposes the use of feature weights to reveal the hierarchical nature of music audio. Feature weighting has been exploited in machine learning, but has not been applied to music audio segmentation. We describe both a global and a local approach to automatic feature weighting. The global approach assigns a single weighting to all features in a song. The local approach uses the local s...



Journal

Journal title: Lecture Notes in Computer Science

Year: 2022

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-12053-4_65